
    A Neurocomputational Model of Grounded Language Comprehension and Production at the Sentence Level

    While symbolic and statistical approaches to natural language processing have become undeniably impressive in recent years, such systems still display a tendency to make errors that are inscrutable to human onlookers. This disconnect with human processing may stem from the vast differences in the substrates that underlie natural language processing in artificial systems versus biological systems. To create a more relatable system, this dissertation turns to the more biologically inspired substrate of neural networks, describing the design and implementation of a model that learns to comprehend and produce language at the sentence level. The model's task is to ground simulated speech streams, representing a simple subset of English, in terms of a virtual environment. The model learns to understand and answer full-sentence questions about the environment by mimicking the speech stream of another speaker, much as a human language learner would. It is the only known neural model to date that can learn to map natural language questions to full-sentence natural language answers, where both question and answer are represented sublexically as phoneme sequences. The model addresses important points for which most other models, neural and otherwise, fail to account. First, the model learns to ground its linguistic knowledge using human-like sensory representations, gaining language understanding at a deeper level than that of syntactic structure. Second, analysis provides evidence that the model learns combinatorial internal representations, thus gaining the compositionality of symbolic approaches to cognition, which is vital for computationally efficient encoding and decoding of meaning. The model does this while retaining the fully distributed representations characteristic of neural networks, providing the resistance to damage and graceful degradation that are generally lacking in symbolic and statistical approaches. Finally, the model learns via direct imitation of another speaker, allowing it to emulate human processing with greater fidelity, thus increasing the relatability of its behavior. Along the way, this dissertation develops a novel training algorithm that, for the first time, requires only local computations to train arbitrary second-order recurrent neural networks. This algorithm is evaluated on its overall efficacy, biological feasibility, and ability to reproduce peculiarities of human learning such as age-correlated effects in second language acquisition.
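    The abstract mentions second-order recurrent neural networks, in which the input and the previous hidden state interact multiplicatively through a third-order weight tensor. The sketch below illustrates that generic formulation only; the class name, sizes, and update rule are assumptions for illustration and do not reproduce the dissertation's architecture or its local training algorithm.

```python
# A minimal sketch of a second-order recurrent unit, assuming the generic
# update h_t[i] = sigmoid( sum_{j,k} W[i,j,k] * x_t[j] * h_{t-1}[k] ).
# Names and dimensions are illustrative, not the dissertation's model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderRNN:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Third-order weight tensor: hidden x input x hidden.
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # Multiplicative (second-order) interaction between the current
        # input and the previous hidden state.
        pre = np.einsum('ijk,j,k->i', self.W, x, self.h)
        self.h = sigmoid(pre)
        return self.h

# Example: feed a short sequence of one-hot "phoneme" vectors.
rnn = SecondOrderRNN(n_in=5, n_hidden=8)
for idx in [0, 3, 1]:
    x = np.zeros(5)
    x[idx] = 1.0
    h = rnn.step(x)
```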

    An External Tabletop Environment for an Interactive Brain Model

    As an important step towards creating a biologically inspired model of language learning in the human brain, we have created an external environment in which to embody such a model. It is our hope that learning in the model will be enhanced and accelerated by the grounding effects that such an environment can provide. The environment can be easily configured to approximate common tabletop testing environments used in clinical studies of human aphasia patients, which will allow us to draw parallels between those studies and similar computational experiments to be conducted by artificially lesioning our language-learning brain model. This document describes the capabilities and programming interfaces of the external environment and how a computational agent might interact with it.
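    To make the idea of an agent-facing tabletop interface concrete, the sketch below shows one hypothetical interaction loop. All names here (TabletopEnvironment, TabletopObject, observe, move_object) are assumptions for illustration and are not the programming interface described in the document.

```python
# A hypothetical sketch of how a computational agent might observe and act on
# a tabletop environment; class and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TabletopObject:
    name: str
    color: str
    position: tuple  # (x, y) coordinates on the table surface

class TabletopEnvironment:
    def __init__(self, objects):
        self.objects = {obj.name: obj for obj in objects}

    def observe(self):
        # Return the agent's current view of the table.
        return list(self.objects.values())

    def move_object(self, name, new_position):
        # Relocate an object, as an experimenter might during a tabletop task.
        self.objects[name].position = new_position

# An agent inspects the scene and answers a simple "where" query.
env = TabletopEnvironment([TabletopObject("cup", "red", (0.2, 0.5)),
                           TabletopObject("ball", "blue", (0.7, 0.1))])
cup = next(o for o in env.observe() if o.name == "cup")
print(f"The {cup.color} {cup.name} is at {cup.position}")
```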

    Fighting Spam with the NeighborhoodWatch DHT

    In this paper, we present DHTBL, an anti-spam blacklist built upon a novel secure distributed hash table (DHT). We show how DHTBL can be used to replace existing DNS-based blacklists (DNSBLs) of IP addresses of mail relays that forward spam. Implementing a blacklist on a DHT improves resilience to DoS attacks and secures message delivery, when compared to DNSBLs. However, due to the sensitive nature of the blacklist, storing the data in a peer-to-peer DHT would invite attackers to infiltrate the system. Typical DHTs can withstand fail-stop failures, but malicious nodes may provide incorrect routing information, refuse to return published items, or simply ignore certain queries. The NeighborhoodWatch DHT is resilient to malicious nodes and maintains the O(log N) bounds on routing table size and expected lookup time. NeighborhoodWatch depends on two assumptions in order to make these guarantees: (1) the existence of an on-line trusted authority that periodically contacts and issues signed certificates to each node, and (2) for every sequence of k + 1 consecutive nodes in the ID space, at least one is alive and non-malicious. We show how NeighborhoodWatch maintains many of its security properties even when the second assumption is violated. Honest nodes in NeighborhoodWatch can detect malicious behavior and expel the responsible nodes from the DHT.
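    The sketch below illustrates, under simplified assumptions, the two ideas the abstract relies on: a mail relay's IP address is hashed to a key and queried in the DHT rather than via a DNSBL lookup, and the liveness assumption that every window of k + 1 consecutive node IDs contains at least one honest, live node. The data layout and function names are assumptions for illustration; this is not the NeighborhoodWatch protocol itself.

```python
# A minimal sketch, assuming a simplified circular ID space.
import hashlib

def blacklist_key(ip: str) -> int:
    # Map a mail relay's IP address to a 160-bit DHT key.
    return int(hashlib.sha1(ip.encode()).hexdigest(), 16)

def assumption_holds(nodes, k):
    """Check that every run of k + 1 consecutive nodes (ordered by ID,
    wrapping around the circular ID space) contains at least one node
    that is both honest and alive."""
    ordered = sorted(nodes, key=lambda n: n["id"])
    n = len(ordered)
    for start in range(n):
        window = [ordered[(start + i) % n] for i in range(k + 1)]
        if not any(node["honest"] and node["alive"] for node in window):
            return False
    return True

# Toy configuration: every third node is malicious, all nodes are alive.
nodes = [{"id": i, "honest": i % 3 != 0, "alive": True} for i in range(16)]
print(hex(blacklist_key("192.0.2.1")))  # DHT key for a relay on the blacklist
print(assumption_holds(nodes, k=2))     # True for this toy configuration
```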